Binary entropy function

In information theory, the binary entropy function, denoted \operatorname H(p) or \operatorname H_\text{b}(p), is defined as the entropy of a Bernoulli process with probability of success p. Mathematically, the Bernoulli trial is modelled as a random variable X that can take on only two values: 0 and 1. The event X = 1 is considered a success and the event X = 0 is considered a failure. (These two events are mutually exclusive and exhaustive.)
If \operatorname{Pr}(X=1) = p, then \operatorname{Pr}(X=0) = 1-p and the entropy of X (in shannons) is given by
:\operatorname H(X) = \operatorname H_\text{b}(p) = -p \log_2 p - (1 - p) \log_2 (1 - p),
where 0 \log_2 0 is taken to be 0. The logarithms in this formula are usually taken to base 2; see ''binary logarithm''.
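As a concrete illustration (a minimal sketch, not part of the original article; the name binary_entropy is our own), the formula above, together with the 0 \log_2 0 = 0 convention, can be computed in Python as follows:

 import math

 def binary_entropy(p: float) -> float:
     # Binary entropy H_b(p) in shannons (bits).
     # The convention 0 * log2(0) = 0 is handled explicitly.
     if p == 0.0 or p == 1.0:
         return 0.0
     return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

The early return makes the boundary convention explicit rather than relying on floating-point behaviour, since math.log2(0.0) raises a ValueError.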
When p = \tfrac 1 2, the binary entropy function attains its maximum value, 1 shannon. This is the case of the unbiased bit, the most common unit of information entropy.
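As a quick check (a short derivation, not in the original text), setting the derivative of the formula above to zero recovers this maximum:
:\frac{d}{dp} \operatorname H_\text{b}(p) = \log_2 \frac{1-p}{p} = 0 \quad\Longleftrightarrow\quad p = \tfrac 1 2,
and indeed \operatorname H_\text{b}(\tfrac 1 2) = -\tfrac 1 2 \log_2 \tfrac 1 2 - \tfrac 1 2 \log_2 \tfrac 1 2 = 1 shannon.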
\operatorname H(p) is distinguished from the entropy function \operatorname H(X) in that the former takes a single real number as a parameter whereas the latter takes a distribution or a random variable as a parameter.
Sometimes the binary entropy function is also written as \operatorname H_2(p).
However, it should not be confused with the Rényi entropy, which is denoted \operatorname H_2(X).
==Explanation==
In terms of information theory, ''entropy'' is considered to be a measure of the uncertainty in a message. To put it intuitively, suppose p=0. At this probability, the event is certain never to occur, so there is no uncertainty at all, leading to an entropy of 0. If p=1, the result is again certain, so the entropy is 0 here as well. When p=1/2, the uncertainty is at a maximum; if one were to place a fair bet on the outcome in this case, there is no advantage to be gained from prior knowledge of the probabilities. In this case, the entropy is at its maximum value of 1 bit. Intermediate values fall between these cases; for instance, if p=1/4, there is still some uncertainty about the outcome, but one can still predict the outcome correctly more often than not, so the uncertainty measure, or entropy, is less than 1 full bit.
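Evaluating the binary_entropy sketch from above at these probabilities confirms the picture (assuming that definition):

 >>> [round(binary_entropy(p), 3) for p in (0.0, 0.25, 0.5, 1.0)]
 [0.0, 0.811, 1.0, 0.0]

In particular, \operatorname H_\text{b}(1/4) \approx 0.811 shannons, strictly between 0 and 1.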

Source: Wikipedia (English edition), "Binary entropy function".